28 research outputs found

    The Region of (In)Stability of a 2-Delay Equation Is Connected


    Pesticide effects on body temperature of torpid/hibernating rodents (Peromyscus leucopus and Spermophilus tridecemlineatus)

    Environmental contaminants have been shown in the laboratory to alter thyroid hormone concentrations. Despite the role these hormones play in the physiological ecology of small mammals, the possible effects of thyroid-disrupting chemicals on mammalian thermal ecology and thermoregulatory ability have not been investigated. Because the energetic impact of such a disruption is likely to be most dramatic during periods that are already energetically stressful, we investigated the effects of two common pesticides (atrazine and lindane) on the use of daily torpor in white-footed mice and the use of hibernation in 13-lined ground squirrels. Fortunately, we found that these strategies for over-wintering success were not impaired.

    DLBricks: Composable Benchmark Generation to Reduce Deep Learning Benchmarking Effort on CPUs (Extended)

    The past few years have seen a surge of applying Deep Learning (DL) models for a wide array of tasks such as image classification, object detection, machine translation, etc. While DL models provide an opportunity to solve otherwise intractable tasks, their adoption relies on them being optimized to meet latency and resource requirements. Benchmarking is a key step in this process but has been hampered in part by the lack of representative and up-to-date benchmarking suites. This is exacerbated by the fast-evolving pace of DL models. This paper proposes DLBricks, a composable benchmark generation design that reduces the effort of developing, maintaining, and running DL benchmarks on CPUs. DLBricks decomposes DL models into a set of unique runnable networks and constructs the original model's performance using the performance of the generated benchmarks. DLBricks leverages two key observations: DL layers are the performance building blocks of DL models, and layers are extensively repeated within and across DL models. Since benchmarks are generated automatically and the benchmarking time is minimized, DLBricks can keep up to date with the latest proposed models, relieving the pressure of selecting representative DL models. Moreover, DLBricks allows users to represent proprietary models within benchmark suites. We evaluate DLBricks using 50 MXNet models spanning 5 DL tasks on 4 representative CPU systems. We show that DLBricks provides an accurate performance estimate for the DL models and reduces the benchmarking time across systems (e.g., within 95% accuracy and up to 4.4x benchmarking time speedup on Amazon EC2 c5.xlarge).
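
    The decomposition idea can be illustrated with a short sketch (a hypothetical simplification in Python, not the authors' implementation): benchmark each unique layer once, then estimate the whole model's latency by summing the measured per-layer latencies weighted by how often each layer repeats.

    import time
    from collections import Counter

    def benchmark_layer(layer_fn, x, runs=50):
        # Time one layer in isolation and return its mean latency in seconds.
        layer_fn(x)  # warm-up run
        start = time.perf_counter()
        for _ in range(runs):
            layer_fn(x)
        return (time.perf_counter() - start) / runs

    def estimate_model_latency(layers, sample_inputs):
        # `layers` is a list of (signature, callable) pairs in model order;
        # `sample_inputs` maps each signature to a representative input.
        # Layers sharing a signature are benchmarked only once, reflecting
        # the observation that layers repeat within and across DL models.
        counts = Counter(sig for sig, _ in layers)
        unique = dict(layers)
        per_layer = {sig: benchmark_layer(fn, sample_inputs[sig])
                     for sig, fn in unique.items()}
        return sum(per_layer[sig] * n for sig, n in counts.items())

    Under this scheme, a model with ten identical convolution blocks costs one benchmark run rather than ten, which is where the benchmarking-time savings come from.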

    Neural Architecture Search: Insights from 1000 Papers

    In the past decade, advances in deep learning have resulted in breakthroughs in a variety of areas, including computer vision, natural language understanding, speech recognition, and reinforcement learning. Specialized, high-performing neural architectures are crucial to the success of deep learning in these areas. Neural architecture search (NAS), the process of automating the design of neural architectures for a given task, is an inevitable next step in automating machine learning and has already outperformed the best human-designed architectures on many tasks. In the past few years, research in NAS has been progressing rapidly, with over 1000 papers released since 2020 (Deng and Lindauer, 2021). In this survey, we provide an organized and comprehensive guide to neural architecture search. We give a taxonomy of search spaces, algorithms, and speedup techniques, and we discuss resources such as benchmarks, best practices, other surveys, and open-source libraries.
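
    As a toy illustration of the search process described above (random search over a hypothetical three-dimensional search space; real NAS systems use far richer spaces, algorithms, and speedup techniques):

    import random

    # Hypothetical search space: each architecture is one choice per dimension.
    SEARCH_SPACE = {
        "depth": [2, 4, 8],
        "width": [64, 128, 256],
        "activation": ["relu", "gelu"],
    }

    def sample_architecture():
        # Draw one architecture uniformly at random from the search space.
        return {name: random.choice(options)
                for name, options in SEARCH_SPACE.items()}

    def evaluate(arch):
        # Placeholder score; a real NAS loop would train the candidate
        # (or query a performance predictor) and return validation accuracy.
        return random.random()

    best = max((sample_architecture() for _ in range(20)), key=evaluate)
    print("best candidate:", best)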